The first few years of an infant's life are known as the critical period, during which the overall development of learning ability is significantly affected by neural plasticity. In recent studies, AI agents built with deep neural networks that mimic biological neurons have exhibited learning periods similar to the human critical period. Especially in this early phase, appropriate stimuli play a vital role in developing learning ability. However, transforming human cognitive bias into an appropriate shaping reward is quite challenging, and prior work on critical periods has not focused on finding the appropriate stimulus. To take a step further, we propose multi-stage reinforcement learning to emphasize finding the "appropriate stimulus" around the critical period. Inspired by the stages of human early cognitive development, we use multi-stage guidance near the critical period and demonstrate the appropriate shaping reward (stage-2 guidance) in terms of the AI agent's performance, efficiency, and stability.
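The idea of staged guidance can be illustrated with a minimal reward-shaping sketch. This is not the paper's actual formulation; the stage boundary, the dense-guidance bonus, and all parameter names are illustrative assumptions.

```python
# Minimal sketch of stage-dependent reward shaping around a critical period.
# The boundary `critical_period_end` and the distance-based bonus are
# hypothetical choices, not taken from the paper.

def shaped_reward(env_reward: float, distance_to_goal: float, step: int,
                  critical_period_end: int = 10_000) -> float:
    """Dense guidance during the critical period, sparse reward afterwards."""
    if step < critical_period_end:
        # Early stage: add a dense distance-based shaping term as the
        # "appropriate stimulus" while the agent is most plastic.
        return env_reward - 0.01 * distance_to_goal
    # Later stage: fall back to the environment's own (sparse) reward.
    return env_reward
```

The point of the multi-stage design is that the same shaping term that helps early can be withdrawn later without destabilizing the learned policy.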
Steering language generation towards objectives or away from undesired content has been a long-standing goal in utilizing language models (LMs). Recent work has demonstrated reinforcement learning and weighted decoding as effective approaches to achieve a higher level of language control and quality, each with its own pros and cons. In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding. Specifically, we adopt the actor-critic framework to train an LM-steering critic from non-differentiable reward models. Similar to weighted decoding, our method freezes the language model and manipulates the output token distribution using the trained critic, improving training efficiency and stability. Evaluation of our method on three controlled generation tasks, namely topic control, sentiment control, and detoxification, shows that our approach generates more coherent and well-controlled texts than previous methods. In addition, CriticControl demonstrates superior generalization ability in zero-shot settings. Human evaluation studies also corroborate our findings.
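The critic-steered decoding step can be sketched as a reweighting of the frozen LM's next-token distribution. This is a schematic of the general weighted-decoding mechanism, not CriticControl's exact scoring; the combination rule and the `alpha` weight are assumptions.

```python
import numpy as np

def critic_reweight(lm_logits: np.ndarray, critic_scores: np.ndarray,
                    alpha: float = 1.0) -> np.ndarray:
    """Shift frozen-LM logits by per-token critic scores, then renormalize.

    The LM parameters are untouched; only the output distribution at each
    decoding step is manipulated, as in weighted decoding.
    """
    logits = lm_logits + alpha * critic_scores
    # Numerically stable softmax over the shifted logits.
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()
```

At each step the sampler would draw the next token from the returned distribution instead of the raw LM softmax, so tokens favored by the reward-trained critic gain probability mass.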
Through in-context learning (ICL), large-scale language models are effective few-shot learners without additional model fine-tuning. However, ICL performance does not scale well with the number of available training samples, as it is limited by the inherent input-length constraint of the underlying language model. Meanwhile, many studies have revealed that language models are also powerful feature extractors, allowing them to be utilized in a black-box manner and enabling the linear probing paradigm, where lightweight discriminators are trained on top of pre-extracted input representations. This paper proposes prompt-augmented linear probing (PALP), a hybrid of linear probing and ICL, which leverages the best of both worlds. PALP inherits the scalability of linear probing and the capability of enforcing language models to derive more meaningful representations by tailoring the input into a more conceivable form. Through in-depth investigations on various datasets, we verify that PALP significantly enhances the input representations, closing the gap between ICL in the data-hungry scenario and fine-tuning in the data-abundant scenario with little training overhead, potentially making PALP a strong alternative in a black-box scenario.
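The two halves of PALP (prompt augmentation before feature extraction, then a linear probe on frozen features) can be sketched as follows. The template text, the least-squares probe, and all function names are illustrative assumptions; the paper's actual prompts and probe training may differ.

```python
import numpy as np

def augment(text: str, label_hint: str = "Sentiment:") -> str:
    """Wrap a raw input in a prompt template before feature extraction,
    so the frozen LM produces a more task-aligned representation."""
    return f"Review: {text}\n{label_hint}"

def train_linear_probe(features: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Fit a least-squares linear probe on pre-extracted (frozen) features."""
    X = np.hstack([features, np.ones((len(features), 1))])  # add bias column
    w, *_ = np.linalg.lstsq(X, targets, rcond=None)
    return w

def probe_predict(features: np.ndarray, w: np.ndarray) -> np.ndarray:
    X = np.hstack([features, np.ones((len(features), 1))])
    return X @ w
```

The LM itself is only queried for representations of the augmented inputs; all trainable parameters live in the tiny probe, which is what makes the approach scale with the number of training samples.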
Dialogue systems can leverage large pre-trained language models and knowledge to generate fluent and informative responses. However, these models are still prone to produce hallucinated responses not supported by the input source, which greatly hinders their application. The heterogeneity between external knowledge and dialogue context challenges representation learning and source integration, and further contributes to unfaithfulness. To handle this challenge and generate more faithful responses, this paper presents RHO ($\rho$) utilizing the representations of linked entities and relation predicates from a knowledge graph (KG). We propose (1) local knowledge grounding to combine textual embeddings with the corresponding KG embeddings; and (2) global knowledge grounding to equip RHO with multi-hop reasoning abilities via the attention mechanism. In addition, we devise a response re-ranking technique based on walks over KG sub-graphs for better conversational reasoning. Experimental results on OpenDialKG show that our approach significantly outperforms state-of-the-art methods on both automatic and human evaluation by a large margin, especially in hallucination reduction (17.54% in FeQA).
We propose a domain adaptation method, MoDA, which adapts a pretrained embodied agent to a new, noisy environment without ground-truth supervision. Map-based memory provides important contextual information for visual navigation, and exhibits unique spatial structure mainly composed of flat walls and rectangular obstacles. Our adaptation approach encourages the inherent regularities on the estimated maps to guide the agent to overcome the prevalent domain discrepancy in a novel environment. Specifically, we propose an efficient learning curriculum to handle the visual and dynamics corruptions in an online manner, self-supervised with pseudo clean maps generated by style transfer networks. Because the map-based representation provides spatial knowledge for the agent's policy, our formulation can deploy the pretrained policy networks from simulators in a new setting. We evaluate MoDA in various practical scenarios and show that our proposed method quickly enhances the agent's performance in downstream tasks including localization, mapping, exploration, and point-goal navigation.
Over the past decade, we have seen tremendous improvements in industrial data and computing power, together with major theoretical advances in machine learning. This creates an opportunity to apply modern machine-learning tools to large-scale nonlinear monitoring and control problems. This paper surveys recent results, with applications in the process industries.
Advanced reactors deployed in the coming decades will face deregulated energy markets and may adopt flexible operation to boost profitability. To aid the transition from the baseload to the flexible operation paradigm, autonomous operation is sought. This work focuses on the control aspect of autonomous operation. Specifically, a hierarchical control system is designed to support constraint enforcement during routine operational transients. Within the system, data-driven modeling, physics-based state observation, and classical control algorithms are integrated to provide an adaptive and robust solution. A 320 MW fluoride-salt-cooled high-temperature pebble-bed reactor is the design basis for demonstrating the control system. The hierarchical control system consists of a supervisory layer and a low-level layer. The supervisory layer receives requests to change the system's operating conditions and accepts or rejects them based on the constraints that have been assigned. Constraints are issued to keep the plant within optimal operating regions. The low-level layer interfaces with the system's actuators to carry out the requested changes while maintaining tracking and regulation duties. To accept requests at the supervisory layer, a reference governor algorithm is adopted. To model the reactor's dynamics, a system-identification algorithm, dynamic mode decomposition, is used. To estimate the evolution of process variables that cannot be measured directly, an unscented Kalman filter is employed, incorporating a nonlinear model of the nuclear dynamics. The composition of these algorithms led to a numerical demonstration of constraint enforcement during a 40% power-reduction transient. The adaptability of the proposed system is demonstrated by modifying the constraint values and enforcing them during the transient. Robustness is also demonstrated by enforcing constraints under noisy conditions.
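The system-identification step named in the abstract, dynamic mode decomposition (DMD), fits a best-fit linear operator to snapshot data. Below is a minimal textbook-style sketch of exact DMD; the reactor model, state variables, and rank choice in the actual work are not reproduced here.

```python
import numpy as np

def dmd(X: np.ndarray, Xp: np.ndarray, rank: int) -> np.ndarray:
    """Exact DMD: return the rank-truncated linear operator A with Xp ~ A @ X.

    X  holds state snapshots x_0 .. x_{m-1} as columns;
    Xp holds the time-shifted snapshots x_1 .. x_m as columns.
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    # Least-squares best-fit operator via the truncated pseudoinverse of X.
    return Xp @ Vh.conj().T @ np.diag(1.0 / s) @ U.conj().T
```

Given such an identified operator, predicted trajectories can be rolled out for the reference governor to check a requested setpoint change against the assigned constraints before accepting it.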
Although Transformers have achieved great success in paraphrase generation, they treat a sentence as a linear sequence of tokens and often ignore its hierarchical information. Prior work has shown that decomposing the input tokens at different levels of granularity (e.g., words, phrases, or sentences) yields substantial improvements, suggesting that Transformers can be enhanced by finer-grained granularity modeling. In this work, we propose continuous decomposition of granularity for neural paraphrase generation (C-DNPG). To efficiently incorporate granularity into sentence encoding, C-DNPG introduces a granularity-aware attention (GA-attention) mechanism that extends multi-head self-attention with: 1) a granularity head that automatically infers the hierarchical structure of a sentence by neurally estimating the granularity level of each input token; and 2) two novel attention masks, namely granularity resonance and granularity scope, to efficiently encode granularity into attention. Experiments on two benchmarks, including Quora question pairs and Twitter URLs, show that C-DNPG outperforms baseline models by a remarkable margin on many metrics. Qualitative analysis shows that C-DNPG indeed captures fine-grained granularity levels effectively.
Recently, graph neural networks (GNNs) have come into the spotlight as powerful tools that can effectively perform various inference tasks on graph-structured data. As the size of real-world graphs continues to grow, GNN training systems face scalability challenges. Distributed training is a popular approach to address this challenge by scaling out CPU nodes. However, less attention has been paid to disk-based GNN training, which can scale up a single-node system in a more cost-effective manner by leveraging high-performance storage devices such as NVMe SSDs. We observe that data movement between main memory and disk is the primary bottleneck in SSD-based training systems, and that the conventional GNN training pipeline is sub-optimal without accounting for this overhead. We therefore propose Ginex, the first SSD-based GNN training system that can process billion-scale graph datasets on a single machine. Inspired by the inspector-executor execution model in compiler optimization, Ginex restructures the GNN training pipeline by separating the sample and gather stages. This separation allows Ginex to realize a provably optimal replacement algorithm, known as Belady's algorithm, for caching feature vectors in memory, which account for the dominant portion of I/O accesses. In our evaluation on four billion-scale graph datasets, Ginex achieves 2.11x higher training throughput on average (up to 2.67x) than an SSD-extended PyTorch Geometric.
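Belady's algorithm is realizable here precisely because the separated sample stage reveals the full future access sequence before the gather stage runs. A minimal simulation of the replacement rule (hypothetical helper names; not Ginex's actual cache implementation):

```python
def belady_misses(accesses: list, cache_size: int) -> int:
    """Simulate Belady's optimal replacement over a known access sequence.

    On a miss with a full cache, evict the cached item whose next use lies
    farthest in the future (or never occurs). Returns the miss count.
    """
    cache, misses = set(), 0
    for i, item in enumerate(accesses):
        if item in cache:
            continue  # hit
        misses += 1
        if len(cache) < cache_size:
            cache.add(item)
            continue
        def next_use(x):
            try:
                return accesses.index(x, i + 1)
            except ValueError:
                return float("inf")  # never used again: ideal victim
        cache.remove(max(cache, key=next_use))
        cache.add(item)
    return misses
```

In Ginex the "accesses" are feature-vector fetches produced by the sampling (inspector) pass, so the gather (executor) pass can cache with provably minimal I/O.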
We present a novel semi-supervised learning framework that intelligently leverages consistency regularization between the model's predictions from two strongly augmented views of an image, weighted by the confidence of pseudo-labels, dubbed ConMatch. While the latest semi-supervised learning methods use weakly and strongly augmented views of an image to define a directional consistency loss, how to define such a direction for the consistency between two strongly augmented views remains unexplored. To address this, we propose novel confidence measures for pseudo-labels from strongly augmented views by using the weakly augmented view as an anchor, in non-parametric and parametric approaches. In particular, in the parametric approach, we present, for the first time, learning the confidence of pseudo-labels within the network, trained with the backbone model in an end-to-end manner. In addition, we present a stage-wise training scheme to boost convergence. When incorporated into existing semi-supervised learners, ConMatch consistently boosts performance. We conduct experiments to demonstrate the effectiveness of ConMatch over the latest methods and provide extensive ablation studies. Code has been made publicly available at https://github.com/jiwoncocoder/conmatch.
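The anchor-weighted consistency idea can be sketched as a confidence-masked cross-entropy between the two strong views. This is a schematic in the spirit of the non-parametric variant only; the exact loss, threshold, and direction rule in ConMatch are not reproduced, and all names are assumptions.

```python
import numpy as np

def conf_weighted_consistency(p_strong1: np.ndarray, p_strong2: np.ndarray,
                              p_weak: np.ndarray,
                              threshold: float = 0.95) -> float:
    """Cross-entropy from strong view 1's pseudo-labels to strong view 2's
    predictions, weighted by the weak-view (anchor) confidence and masked
    below a confidence threshold, as in FixMatch-style methods."""
    confidence = p_weak.max(axis=-1)           # anchor confidence per sample
    targets = p_strong1.argmax(axis=-1)        # pseudo-labels from view 1
    ce = -np.log(p_strong2[np.arange(len(targets)), targets] + 1e-12)
    mask = (confidence >= threshold).astype(float)
    return float((confidence * mask * ce).mean())
```

Samples whose weak-view anchor is uncertain contribute nothing, so the strong-strong consistency term only fires where the pseudo-label is trustworthy.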